
    Control of Complex Dynamic Systems by Neural Networks

    This paper considers the use of neural networks (NNs) in controlling a nonlinear, stochastic system with unknown process equations. The NN is used to model the resulting unknown control law. The approach here is based on using the output error of the system to train the NN controller, without the need to construct a separate model (NN or other type) for the unknown process dynamics. To implement such a direct adaptive control approach, the connection weights in the NN must be estimated while the system is being controlled. Because the unknown process dynamics enter the feedback loop, however, it is not possible to determine the gradient of the loss function for use in standard (back-propagation-type) weight estimation algorithms. Therefore, this paper considers the use of a new stochastic approximation algorithm for this weight estimation, which is based on a "simultaneous perturbation" gradient approximation that requires only the system output error. It is shown that this algorithm can greatly enhance efficiency relative to more standard stochastic approximation algorithms based on finite-difference gradient approximations.
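The simultaneous-perturbation idea the abstract refers to can be sketched as follows. This is a minimal illustration of the basic SPSA gradient estimate and update loop, not the paper's controller: the function names, gain constants (`a`, `c`, `alpha`, `gamma`), and iteration counts are illustrative choices, and the key property is that each gradient estimate needs only two loss measurements regardless of the parameter dimension.

```python
import numpy as np

def spsa_gradient(loss, theta, c, rng):
    """One simultaneous-perturbation gradient estimate.

    All components of theta are perturbed at once along a random
    +/-1 (symmetric Bernoulli) direction delta, so only two loss
    measurements are needed, independent of the dimension of theta.
    """
    p = theta.size
    delta = rng.choice([-1.0, 1.0], size=p)   # simultaneous perturbation direction
    y_plus = loss(theta + c * delta)
    y_minus = loss(theta - c * delta)
    return (y_plus - y_minus) / (2.0 * c * delta)

def spsa_minimize(loss, theta0, iters=2000, a=0.1, c=0.1,
                  alpha=0.602, gamma=0.101, seed=0):
    """Basic SPSA loop with standard decaying gain sequences a_k, c_k."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float).copy()
    for k in range(iters):
        a_k = a / (k + 1) ** alpha            # step-size gain
        c_k = c / (k + 1) ** gamma            # perturbation magnitude
        g_hat = spsa_gradient(loss, theta, c_k, rng)
        theta -= a_k * g_hat
    return theta
```

For a simple quadratic loss, the iterate converges to the minimizer even though only loss values (never gradients) are measured.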

    Mechanisms governing interannual variability of upper-ocean temperature in a global ocean hindcast simulation

    Author Posting. © American Meteorological Society, 2007. This article is posted here by permission of American Meteorological Society for personal use, not for redistribution. The definitive version was published in Journal of Physical Oceanography 37 (2007): 1918-1938, doi:10.1175/jpo3089.1.
    The interannual variability in upper-ocean (0–400 m) temperature and governing mechanisms for the period 1968–97 are quantified from a global ocean hindcast simulation driven by atmospheric reanalysis and satellite data products. The unconstrained simulation exhibits considerable skill in replicating the observed interannual variability in vertically integrated heat content estimated from hydrographic data and monthly satellite sea surface temperature and sea surface height data. Globally, the most significant interannual variability modes arise from El Niño–Southern Oscillation and the Indian Ocean zonal mode, with substantial extension beyond the Tropics into the midlatitudes. In the well-stratified Tropics and subtropics, net annual heat storage variability is driven predominately by the convergence of the advective heat transport, mostly reflecting velocity anomalies times the mean temperature field. Vertical velocity variability is caused by remote wind forcing, and subsurface temperature anomalies are governed mostly by isopycnal displacements (heave). The dynamics at mid- to high latitudes are qualitatively different and vary regionally. Interannual temperature variability is more coherent with depth because of deep winter mixing and variations in western boundary currents and the Antarctic Circumpolar Current that span the upper thermocline. Net annual heat storage variability is forced by a mixture of local air–sea heat fluxes and the convergence of the advective heat transport, the latter resulting from both velocity and temperature anomalies. Also, density-compensated temperature changes on isopycnal surfaces (spice) are quantitatively significant.
    This work was supported in part from NOAA Office of Global Programs ACCP Grant NA86GP0290, NSF Grant OCE96-33681, and the WHOI Ocean and Climate Change Institute.

    Abstracts of presentations on plant protection issues at the Xth International Congress of Virology: August 11-16, 1996, Binyanei haOoma, Jerusalem, Israel. Part 2: Plenary Lectures


    Opioid substitution and antagonist therapy trials exclude the common addiction patient: a systematic review and analysis of eligibility criteria


    Stochastic Optimization

    Stochastic optimization algorithms have been growing rapidly in popularity over the last decade or two, with a number of methods now becoming "industry standard" approaches for solving challenging optimization problems. This chapter provides a synopsis of some of the critical issues associated with stochastic optimization and gives a summary of several popular algorithms. Much more complete discussions are available in the indicated references. To help constrain the scope of this article, we restrict our attention to methods using only measurements of the criterion (loss function). Hence, we do not cover the many stochastic methods using information such as gradients of the loss function. Section 1 discusses some general issues in stochastic optimization. Section 2 discusses random search methods, which are simple and surprisingly powerful in many applications. Section 3 discusses stochastic approximation, which is a foundational approach in stochastic optimization. Section 4 discusses a popular method that is based on connections to natural evolution—genetic algorithms. Finally, Section 5 offers some concluding remarks.
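The "measurements of the criterion only" restriction mentioned above is easy to see in the random search methods of Section 2. Below is a minimal sketch of localized random search, assuming a Gaussian proposal with a fixed step scale `sigma` (both the function name and the parameter defaults are illustrative, not from the chapter): a candidate point near the current best is accepted only if the measured loss improves, so no gradient information is ever used.

```python
import numpy as np

def random_search(loss, theta0, n_iters=500, sigma=0.5, seed=0):
    """Localized random search using loss measurements only.

    Propose a Gaussian step around the current best point and
    keep the candidate only when the loss improves (greedy accept).
    """
    rng = np.random.default_rng(seed)
    best = np.asarray(theta0, dtype=float)
    best_loss = loss(best)
    for _ in range(n_iters):
        cand = best + rng.normal(scale=sigma, size=best.shape)
        cand_loss = loss(cand)
        if cand_loss < best_loss:          # greedy acceptance rule
            best, best_loss = cand, cand_loss
    return best, best_loss
```

Despite its simplicity, on a smooth low-dimensional problem this steadily drives the loss down, which is the "surprisingly powerful" behavior the chapter alludes to.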

    Feedback and weighting mechanisms for improving Jacobian (Hessian) estimates in the adaptive simultaneous perturbation algorithm

    Abstract—It is known that a stochastic approximation (SA) analogue of the deterministic Newton-Raphson algorithm provides an asymptotically optimal or near-optimal form of stochastic search. However, directly determining the required Jacobian matrix (or Hessian matrix for optimization) has often been difficult or impossible in practice. This paper presents a general adaptive SA algorithm that is based on a simple method for estimating the Jacobian matrix while concurrently estimating the primary parameters of interest. Relative to prior methods for adaptively estimating the Jacobian matrix, the paper introduces two enhancements that generally improve the quality of the estimates for underlying Jacobian (Hessian) matrices, thereby improving the quality of the estimates for the primary parameters of interest. The first enhancement rests on a feedback process that uses previous Jacobian estimates to reduce the error in the current estimate. The second enhancement is based on an optimal weighting of per-iteration Jacobian estimates. Through the use of simultaneous perturbations, the algorithm requires only a small number of loss function or gradient measurements per iteration—independent of the problem dimension—to adaptively estimate the Jacobian matrix and the parameters of primary interest.
    Index Terms—Adaptive estimation, Jacobian matrix, root-finding, simultaneous perturbation stochastic approximation (SPSA), stochastic optimization.
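To make the per-iteration Jacobian (Hessian) estimation concrete, here is a minimal sketch of the basic simultaneous-perturbation Hessian estimate, averaged with equal weights. This deliberately omits the paper's two enhancements (the feedback process and the optimal per-iteration weighting); a plain running average stands in for the weighting step, and all names and gain values are illustrative. Each per-iteration estimate uses four loss measurements, independent of the problem dimension.

```python
import numpy as np

def spsa_grad(loss, theta, c, delta):
    """Two-measurement simultaneous-perturbation gradient estimate."""
    y_plus = loss(theta + c * delta)
    y_minus = loss(theta - c * delta)
    return (y_plus - y_minus) / (2.0 * c * delta)

def hessian_estimate(loss, theta, n_iters=500, c=0.05, ctil=0.05, seed=0):
    """Equal-weight average of per-iteration simultaneous-perturbation
    Hessian estimates at a fixed point theta.

    Each iteration differences two SPSA gradient estimates taken a
    small step ctil apart along a second random direction, forms a
    rank-one Hessian estimate, and symmetrizes it.
    """
    rng = np.random.default_rng(seed)
    p = len(theta)
    H_bar = np.zeros((p, p))
    for k in range(n_iters):
        delta = rng.choice([-1.0, 1.0], size=p)      # gradient perturbation
        delta_til = rng.choice([-1.0, 1.0], size=p)  # Hessian perturbation
        g1 = spsa_grad(loss, theta, c, delta)
        g2 = spsa_grad(loss, theta + ctil * delta_til, c, delta)
        dG = (g2 - g1) / ctil                        # gradient difference
        H_hat = np.outer(dG, 1.0 / delta_til)        # per-iteration estimate
        H_hat = 0.5 * (H_hat + H_hat.T)              # symmetrize
        H_bar += (H_hat - H_bar) / (k + 1)           # running (equal-weight) average
    return H_bar
```

For a quadratic loss the per-iteration estimates are unbiased for the true Hessian, and the average converges toward it; the paper's feedback and weighting mechanisms are aimed at reducing exactly the per-iteration error that this plain average leaves behind.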